Breast Cancer Detection
Analysis of Invasive Breast Cancer in Mammograms Using YOLO, Explainability, and Domain Adaptation
Adhikari, Jayan, Joshi, Prativa, Baral, Susish
Abstract--Deep learning models for breast cancer detection in mammographic images suffer significant reliability problems when presented with out-of-distribution (OOD) inputs, such as other imaging modalities (CT, MRI, X-ray) or equipment variations, leading to unreliable detection and misdiagnosis. Our strategy builds an in-domain gallery and uses cosine similarity to strictly reject non-mammographic inputs before processing, ensuring that only domain-relevant images enter the detection pipeline. The OOD detection component achieves 99.77% overall accuracy, with perfect 100% accuracy on OOD test sets, effectively filtering out irrelevant imaging modalities. ResNet50 was selected as the optimal backbone after a search over 12 CNN architectures. The joint framework unites OOD robustness with high detection performance (mAP@0.5). Experimental validation establishes that OOD filtering significantly improves system reliability by preventing false alarms on out-of-distribution inputs while maintaining high detection accuracy on mammographic data. The present study offers a foundation for deploying reliable AI-based breast cancer detection systems in diverse clinical environments with inherent data heterogeneity.

A global health concern, breast cancer is the second-highest cause of cancer-related mortality in women, and it was recorded as the most diagnosed cancer in the world in 2020 [1]. According to the World Health Organization, all types of cancer account for 626,700 deaths of women globally, of which breast cancer is the predominant and second leading cause [2]. If diagnosed at an early stage of development, the survival rate is likely to be high and the treatment cost reduced [3]. Studies have found that 30% of breast cancers are diagnosed when the mass is 30 mm in size.
- North America > United States (0.04)
- Oceania > New Zealand (0.04)
- Europe > Portugal (0.04)
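The gallery-based rejection step described in the abstract above can be sketched as a cosine-similarity filter over feature embeddings. This is a minimal illustration, not the authors' implementation: the embeddings, the threshold value, and the function names are all assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_in_domain(query, gallery, threshold=0.8):
    """Accept an input only if its best match in the in-domain
    gallery clears the similarity threshold; otherwise reject as OOD."""
    best = max(cosine_similarity(query, g) for g in gallery)
    return best >= threshold

# Toy 3-d embeddings: the gallery holds mammogram features.
gallery = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
mammogram_like = [0.85, 0.15, 0.05]
ct_like = [0.0, 0.1, 0.9]

print(is_in_domain(mammogram_like, gallery))  # True: enters the pipeline
print(is_in_domain(ct_like, gallery))         # False: rejected as OOD
```

In practice the embeddings would come from the trained backbone and the threshold would be calibrated on a validation set.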
MV-MLM: Bridging Multi-View Mammography and Language for Breast Cancer Diagnosis and Risk Prediction
Zheng, Shunjie-Fabian, Lee, Hyeonjun, Kooi, Thijs, Diba, Ali
Large annotated datasets are essential for training robust Computer-Aided Diagnosis (CAD) models for breast cancer detection or risk prediction. However, acquiring such datasets with fine-detailed annotation is both costly and time-consuming. Vision-Language Models (VLMs), such as CLIP, which are pre-trained on large image-text pairs, offer a promising solution by enhancing robustness and data efficiency in medical imaging tasks. This paper introduces a novel Multi-View Mammography and Language Model for breast cancer classification and risk prediction, trained on a dataset of paired mammogram images and synthetic radiology reports. Our MV-MLM leverages multi-view supervision to learn rich representations from extensive radiology data by employing cross-modal self-supervision across image-text pairs. This includes multiple views and the corresponding pseudo-radiology reports. We propose a novel joint visual-textual learning strategy to enhance generalization and accuracy performance over different data types and tasks to distinguish breast tissues or cancer characteristics (calcification, mass) and utilize these patterns to understand mammography images and predict cancer risk. We evaluated our method on both private and publicly available datasets, demonstrating that the proposed model achieves state-of-the-art performance in three classification tasks: (1) malignancy classification, (2) subtype classification, and (3) image-based cancer risk prediction. Furthermore, the model exhibits strong data efficiency, outperforming existing fully supervised or VLM baselines while trained on synthetic text reports and without the need for actual radiology reports.
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
A Density-Informed Multimodal Artificial Intelligence Framework for Improving Breast Cancer Detection Across All Breast Densities
Kakileti, Siva Teja, Govindaraju, Bharath, Sampangi, Sudhakar, Manjunath, Geetha
Mammography, the current standard for breast cancer screening, has reduced sensitivity in women with dense breast tissue, contributing to missed or delayed diagnoses. Thermalytix, an AI-based thermal imaging modality, captures functional vascular and metabolic cues that may complement mammographic structural data. This study investigates whether a breast density-informed multi-modal AI framework can improve cancer detection by dynamically selecting the appropriate imaging modality based on breast tissue composition. A total of 324 women underwent both mammography and thermal imaging. Mammography images were analyzed using a multi-view deep learning model, while Thermalytix assessed thermal images through vascular and thermal radiomics. The proposed framework utilized mammography AI for fatty breasts and Thermalytix AI for dense breasts, optimizing predictions based on tissue type. This multi-modal AI framework achieved a sensitivity of 94.55% (95% CI: 88.54-100) and specificity of 79.93% (95% CI: 75.14-84.71), outperforming standalone mammography AI (sensitivity 81.82%, specificity 86.25%) and Thermalytix AI (sensitivity 92.73%, specificity 75.46%). Importantly, the sensitivity of mammography AI dropped significantly in dense breasts (67.86%) versus fatty breasts (96.30%), whereas Thermalytix AI maintained high and consistent sensitivity in both (92.59% and 92.86%, respectively). This demonstrates that a density-informed multi-modal AI framework can overcome key limitations of unimodal screening and deliver high performance across diverse breast compositions. The proposed framework is interpretable, low-cost, and easily deployable, offering a practical path to improving breast cancer screening outcomes in both high-resource and resource-limited settings.
- Asia > India > Karnataka > Bengaluru (0.04)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- Europe > Switzerland (0.04)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
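The density-informed routing described in the abstract above reduces to a simple rule: score fatty breasts with the mammography model and dense breasts with the Thermalytix model. The sketch below is illustrative only; the use of ACR density grades (A-D) as the switch, and the function name, are assumptions not stated in the abstract.

```python
def route_prediction(density_category, mammo_score, thermalytix_score):
    """Pick the modality score based on breast density:
    fatty breasts (assumed ACR A/B) -> mammography AI score,
    dense breasts (assumed ACR C/D) -> Thermalytix AI score."""
    dense = density_category.upper() in ("C", "D")
    return thermalytix_score if dense else mammo_score

print(route_prediction("B", mammo_score=0.91, thermalytix_score=0.40))  # 0.91
print(route_prediction("D", mammo_score=0.30, thermalytix_score=0.88))  # 0.88
```

The design choice is that each unimodal model is only trusted on the tissue type where it retains sensitivity, which is how the framework avoids mammography's drop on dense breasts.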
Towards Human-AI Collaboration System for the Detection of Invasive Ductal Carcinoma in Histopathology Images
Han, Shuo, Eldaly, Ahmed Karam, Oyelere, Solomon Sunday
Invasive ductal carcinoma (IDC) is the most prevalent form of breast cancer, and early, accurate diagnosis is critical to improving patient survival rates by guiding treatment decisions. Combining medical expertise with artificial intelligence (AI) holds significant promise for enhancing the precision and efficiency of IDC detection. In this work, we propose a human-in-the-loop (HITL) deep learning system designed to detect IDC in histopathology images. The system begins with an initial diagnosis provided by a high-performance EfficientNetV2S model, offering feedback from AI to the human expert. Medical professionals then review the AI-generated results, correct any misclassified images, and integrate the revised labels into the training dataset, forming a feedback loop from the human back to the AI. This iterative process refines the model's performance over time. The EfficientNetV2S model itself achieves state-of-the-art performance compared to existing methods in the literature, with an overall accuracy of 93.65%. Incorporating the human-in-the-loop system further improves the model's accuracy using four experimental groups with misclassified images. These results demonstrate the potential of this collaborative approach to enhance AI performance in diagnostic systems. This work contributes to advancing automated, efficient, and highly accurate methods for IDC detection through human-AI collaboration, offering a promising direction for future AI-assisted medical diagnostics.
- Asia > Singapore (0.04)
- Africa > South Africa > Gauteng > Johannesburg (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Research Report > New Finding (0.48)
- Research Report > Strength Medium (0.34)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
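One round of the human-in-the-loop cycle from the abstract above can be sketched as: the model labels a batch, a reviewer corrects misclassifications, and the reviewed pairs are folded back into the training set for the next fine-tune. Everything here (function names, the toy stand-in model) is a hypothetical illustration, not the paper's system.

```python
def hitl_round(model_predict, images, true_labels, training_set):
    """One human-in-the-loop iteration: predict, let the reviewer
    correct, and append the reviewed (image, label) pairs to the
    training set. Returns the number of corrections made."""
    corrections = 0
    for image, truth in zip(images, true_labels):
        predicted = model_predict(image)
        label = truth  # reviewer keeps or overrides the AI label
        if predicted != truth:
            corrections += 1
        training_set.append((image, label))
    return corrections

# Toy stand-in model: flags an image as IDC if its mean intensity is high.
model = lambda img: int(sum(img) / len(img) > 0.5)
train = []
fixed = hitl_round(model, [[0.9, 0.8], [0.2, 0.1]], [1, 1], train)
print(fixed)        # 1: one misclassified image corrected by the reviewer
print(len(train))   # 2: both reviewed images join the training set
```

In the paper this loop is run over experimental groups of misclassified images, with retraining between rounds.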
Enhancing Breast Cancer Detection with Vision Transformers and Graph Neural Networks
Cai, Yeming, Li, Zhenglin, Wang, Yang
Breast cancer is a leading cause of death among women globally, and early detection is critical for improving survival rates. This paper introduces an innovative framework that integrates Vision Transformers (ViT) and Graph Neural Networks (GNN) to enhance breast cancer detection using the CBIS-DDSM dataset. Our framework leverages ViT's ability to capture global image features and GNN's strength in modeling structural relationships, achieving an accuracy of 84.2%, outperforming traditional methods. Additionally, interpretable attention heatmaps provide insights into the model's decision-making process, aiding radiologists in clinical settings.
- Asia > China > Hubei Province > Wuhan (0.05)
- North America > United States > Texas > Brazos County > College Station (0.04)
- Asia > Japan (0.04)
Breast Cancer Detection from Multi-View Screening Mammograms with Visual Prompt Tuning
Accurate detection of breast cancer from high-resolution mammograms is crucial for early diagnosis and effective treatment planning. Previous studies have shown the potential of using single-view mammograms for breast cancer detection. However, incorporating multi-view data can provide more comprehensive insights. Multi-view classification, especially in medical imaging, presents unique challenges, particularly when dealing with large-scale, high-resolution data. In this work, we propose a novel Multi-view Visual Prompt Tuning Network (MVPT-NET) for analyzing multiple screening mammograms. We first pretrain a robust single-view classification model on high-resolution mammograms and then innovatively adapt multi-view feature learning into a task-specific prompt tuning process. This technique selectively tunes a minimal set of trainable parameters (7%) while retaining the robustness of the pre-trained single-view model, enabling efficient integration of multi-view data without the need for aggressive downsampling. Our approach offers an efficient alternative to traditional feature fusion methods, providing a more robust, scalable, and efficient solution for high-resolution mammogram analysis. Experimental results on a large multi-institution dataset demonstrate that our method outperforms conventional approaches while maintaining detection efficiency, achieving an AUROC of 0.852 for distinguishing between Benign, DCIS, and Invasive classes. This work highlights the potential of MVPT-NET for medical imaging tasks and provides a scalable solution for integrating multi-view data in breast cancer detection.
- Europe > United Kingdom (0.29)
- North America > Canada > Ontario > Toronto (0.14)
- Europe > France > Grand Est > Bas-Rhin > Strasbourg (0.04)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (0.94)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
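The prompt-tuning idea in the abstract above amounts to freezing the pretrained backbone and training only a small set of prompt parameters. The sketch below illustrates that bookkeeping with made-up parameter names and sizes; they are not MVPT-NET's actual layers, and the ~7% split is contrived to echo the figure the abstract reports.

```python
def mark_trainable(param_sizes, trainable_prefixes=("prompt",)):
    """Freeze every pretrained weight; leave only parameters whose
    names start with a trainable prefix (here, the prompt tokens).
    Returns per-parameter flags and the trainable fraction."""
    flags = {name: name.startswith(trainable_prefixes) for name in param_sizes}
    trainable = sum(param_sizes[n] for n, on in flags.items() if on)
    total = sum(param_sizes.values())
    return flags, trainable / total

# Hypothetical parameter counts chosen so prompts are ~7% of the model.
params = {
    "backbone.conv1": 9_400_000,
    "backbone.blocks": 84_000_000,
    "prompt.view_tokens": 7_000_000,
}
flags, fraction = mark_trainable(params)
print(flags["backbone.blocks"])  # False: pretrained weight stays frozen
print(round(fraction, 2))        # 0.07
```

In a deep learning framework the same split would be applied by toggling each parameter's gradient flag before building the optimizer over the trainable subset.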
Analysis of Transferred Pre-Trained Deep Convolution Neural Networks in Breast Masses Recognition
Hamad, Qusay Shihab, Samma, Hussein, Suandi, Shahrel Azmin
Breast cancer detection based on pre-trained convolution neural networks (CNNs) has gained much interest over other conventional computer-based systems. In the past few years, CNN technology has been the most promising way to find cancer in mammogram scans. In this paper, the effect of layer freezing in a pre-trained CNN is investigated for breast cancer detection by classifying mammogram images as benign or malignant. Different VGG19 scenarios were examined based on the number of convolution layer blocks that were frozen, for a total of six scenarios. The primary benefits of this research are twofold: it improves the model's ability to detect breast cancer cases, and it reduces the training time of VGG19 by freezing certain layers. To evaluate the performance of these scenarios, 1693 mammographic images of benign and malignant breast cancers were utilized. According to the reported results, the best recognition rate was obtained by freezing the first block of VGG19, with a sensitivity of 95.64%, while training the entire VGG19 yielded 94.48%.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Asia > Middle East > Saudi Arabia > Eastern Province > Dhahran (0.04)
- Asia > Middle East > Iraq > Baghdad Governorate > Baghdad (0.04)
- Asia > Malaysia > Penang > Nibong Tebal (0.04)
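The layer-freezing scenarios in the abstract above vary how many of VGG19's five convolutional blocks are frozen, from none (full training) up to all five; together with full training that gives the six scenarios studied. A minimal sketch of that enumeration, with illustrative block names:

```python
# VGG19's five convolutional blocks, in forward order.
VGG19_BLOCKS = ["block1", "block2", "block3", "block4", "block5"]

def freeze_scenario(n_frozen_blocks):
    """Return which conv blocks are frozen vs. fine-tuned for one
    layer-freezing scenario (0 = train the entire network)."""
    frozen = VGG19_BLOCKS[:n_frozen_blocks]
    tuned = VGG19_BLOCKS[n_frozen_blocks:]
    return frozen, tuned

# The best-reported scenario freezes only the first block.
frozen, tuned = freeze_scenario(1)
print(frozen)  # ['block1']
print(tuned)   # ['block2', 'block3', 'block4', 'block5']
```

In a real pipeline, freezing a block means disabling gradient updates for its weights, so early generic features are kept while later blocks adapt to mammograms.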
AI detects woman's breast cancer after routine screening missed it: 'Deeply grateful'
There are less obvious early signs of the disease that all women should be aware of -- here's what to know. A U.K. woman is thanking artificial intelligence for saving her life. Sheila Tooth of Littlehampton, West Sussex, had her breast cancer successfully detected by AI after routine testing came back "normal," according to a report by SWNS. Tooth, 68, was told she was clear of breast cancer after her last mammogram was reviewed by two radiologists. Her mammogram was then analyzed by an AI system, Mammography Intelligent Assessment, as part of a system being tested by University Hospitals Sussex.
- Europe > United Kingdom > England > West Sussex (0.27)
- North America > United States > Texas (0.05)
Multi-Tiered Self-Contrastive Learning for Medical Microwave Radiometry (MWR) Breast Cancer Detection
Galazis, Christoforos, Wu, Huiyi, Goryanin, Igor
Breast cancer, marked by the uncontrolled and rapid growth of cells due to genetic mutations, significantly impacts global health, recording one of the highest incidence rates of any cancer. In 2020 alone, it was estimated to account for 2.3 million new cases, becoming the primary cause of cancer-related death among women, with nearly 700,000 deaths [1]. Disturbingly, forecasts suggest a continued rise in both the incidence and death rates associated with breast cancer [2]. The pivotal role of early detection in reducing mortality and easing the healthcare burden cannot be overstated. In this context, Microwave Radiometry (MWR) emerges as a promising imaging modality that passively captures the natural microwave emissions of human tissues [3]. Its utility spans a broad spectrum of clinical areas, including but not limited to the breasts [3, 4, 5, 6], brain [7, 8], lungs [9], veins [10], and musculoskeletal structures [11]. Within the domain of breast cancer screening, MWR leverages the fact that cancerous tissues, due to their increased metabolic rate, emit more heat than normal tissue [4].
- Europe > United Kingdom > England > Greater London > London (0.04)
- Asia > Japan > Kyūshū & Okinawa > Okinawa (0.04)
- Europe > United Kingdom > Scotland > City of Edinburgh > Edinburgh (0.04)
Mammo-Clustering: A Weakly Supervised Multi-view Global-Local Context Clustering Network for Detection and Classification in Mammography
Yang, Shilong, Zhang, Chulong, Zang, Qi, Yu, Juan, Zeng, Liang, Luo, Xiao, Xing, Yexuan, Pan, Xin, Li, Qi, Liang, Xiaokun, Xie, Yaoqin
Breast cancer has long posed a significant threat to women's health, making early screening crucial for mitigating its impact. However, mammography, the preferred method for early screening, faces limitations such as the burden of double reading by radiologists, challenges in widespread adoption in remote and underdeveloped areas, and obstacles in intelligent early screening development due to data constraints. To address these challenges, we propose a weakly supervised multi-view mammography early screening model for breast cancer based on context clustering. Context clustering, a feature extraction structure that is neither CNN nor transformer, combined with multi-view learning for information complementation, presents a promising approach. The weak supervision design specifically addresses data limitations. Our model achieves state-of-the-art performance with fewer parameters on two public datasets, with an AUC of 0.828 on the Vindr-Mammo dataset and 0.805 on the CBIS-DDSM dataset. Our model shows potential in reducing the burden on doctors and increasing the feasibility of breast cancer screening for women in underdeveloped regions.
- North America (0.14)
- Asia > China > Guangdong Province > Shenzhen (0.05)
- Asia > China > Shandong Province > Qingdao (0.04)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)